On the Convergence of Inexact Gradient Descent with Controlled Synchronization Steps

Authors

Abstract

We develop a gradient-like algorithm to minimize a sum of peer objective functions based on coordination through a peer-to-peer interconnection network. The algorithm admits two stages: the first computes gradients, possibly with errors, to update the locally replicated decision variables at each peer, and the second uses error-free averaging to synchronize the local replicas. Unlike many related algorithms, the errors permitted in our algorithm can cover a wide range of inexactness, as long as they are bounded. Moreover, we do not impose any gradient boundedness conditions on the objective functions. Furthermore, the synchronization stage is not conducted in a periodic manner, as in related algorithms. Instead, a verifiable criterion is devised to dynamically trigger the peer-to-peer synchronization stage, so that the expensive communication overhead can be significantly reduced. Finally, convergence is established under mild conditions.
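A minimal numerical sketch of the two-stage structure described in the abstract is given below: each peer performs local descent steps with bounded gradient errors on its own replica, and an error-free averaging step is triggered only when the replicas drift apart. The function name, step size, noise model, and the simple disagreement-based trigger are illustrative assumptions, not the criterion derived in the paper.

```python
import numpy as np

def inexact_gd_with_triggered_sync(grads, x0, n_peers, step=0.05,
                                   noise_scale=0.01, sync_tol=0.5,
                                   n_iters=200, rng=None):
    """Sketch of a two-stage scheme: (1) each peer takes an inexact
    gradient step on its own replica of the decision variable;
    (2) an error-free averaging (synchronization) step is triggered
    only when the replicas drift apart beyond `sync_tol`.
    The trigger rule and constants here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.tile(np.asarray(x0, dtype=float), (n_peers, 1))  # local replicas
    sync_count = 0
    for _ in range(n_iters):
        # Stage 1: local descent with bounded gradient errors
        for i in range(n_peers):
            err = noise_scale * rng.standard_normal(x.shape[1])
            x[i] -= step * (grads[i](x[i]) + err)
        # Stage 2: trigger error-free averaging when disagreement is large
        mean = x.mean(axis=0)
        disagreement = np.max(np.linalg.norm(x - mean, axis=1))
        if disagreement > sync_tol:
            x[:] = mean  # exact synchronization of all replicas
            sync_count += 1
    return x.mean(axis=0), sync_count

# Toy usage: minimize sum_i ||x - c_i||^2 over 4 peers
if __name__ == "__main__":
    centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
               np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
    grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
    x_star, syncs = inexact_gd_with_triggered_sync(grads, [3.0, 3.0], 4)
    print("approx minimizer:", x_star, "sync steps:", syncs)
```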


Similar articles

On the Convergence of Decentralized Gradient Descent

Consider the consensus problem of minimizing $f(x) = \sum_{i=1}^{n} f_i(x)$, where each $f_i$ is only known to one individual agent $i$ belonging to a connected network of $n$ agents. All the agents shall collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need of a fusion center, offer better network load balance, and improve da...
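A minimal sketch of a consensus-based decentralized gradient descent (DGD) update of the kind discussed above, assuming a doubly stochastic mixing matrix W matched to the network; the weights, step size, and toy objectives are illustrative and not taken from the cited paper.

```python
import numpy as np

def decentralized_gd(grads, W, x0, step=0.05, n_iters=300):
    """Minimal decentralized gradient descent (DGD) sketch: each agent i
    mixes its neighbors' iterates with weights W[i, :] and then takes a
    local gradient step; W is assumed doubly stochastic."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    x = np.tile(np.asarray(x0, dtype=float), (n, 1))  # one row per agent
    for _ in range(n_iters):
        mixed = W @ x                                   # neighbor averaging
        local = np.stack([grads[i](x[i]) for i in range(n)])
        x = mixed - step * local                        # local descent step
    return x

# Example: 3 agents, f_i(x) = (x - i)^2, so the sum is minimized at x = 1.
if __name__ == "__main__":
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
    grads = [lambda x, i=i: 2.0 * (x - i) for i in range(3)]
    print(decentralized_gd(grads, W, [0.0]))
```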


Convergence of Gradient Descent on Separable Data

The implicit bias of gradient descent is not fully understood even in simple linear classification tasks (e.g., logistic regression). Soudry et al. (2018) studied this bias on separable data, where there are multiple solutions that correctly classify the data. It was found that, when optimizing monotonically decreasing loss functions with exponential tails using gradient descent, the linear cla...


Convergence Analysis of Gradient Descent Stochastic Algorithms

This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the ca...


Convergence properties of gradient descent noise reduction

Gradient descent noise reduction is a technique that attempts to recover the true signal, or trajectory, from noisy observations of a non-linear dynamical system for which the dynamics are known. This paper provides the first rigorous proof that the algorithm will recover the original trajectory for a broad class of dynamical systems under certain conditions. The proof is obtained using ideas f...


Convergence of Stochastic Gradient Descent for PCA

We consider the problem of principal component analysis (PCA) in a streaming stochastic setting, where our goal is to find a direction of approximate maximal variance, based on a stream of i.i.d. data points in $\mathbb{R}^d$. A simple and computationally cheap algorithm for this is stochastic gradient descent (SGD), which incrementally updates its estimate based on each new data point. However, due to the ...
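A minimal sketch of an Oja-style streaming SGD update for the leading principal direction, in the spirit of the setting above; the constant step size and unit-sphere renormalization are illustrative choices rather than the specific variant analyzed in the cited paper.

```python
import numpy as np

def streaming_pca_sgd(data_stream, dim, step=0.1, rng=None):
    """Oja-style stochastic gradient sketch for the top principal
    direction: for each sample x, move w toward (x x^T) w and renormalize.
    Step size and projection to the unit sphere are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in data_stream:
        w += step * x * (x @ w)      # stochastic gradient step on w^T C w
        w /= np.linalg.norm(w)       # keep the iterate on the unit sphere
    return w

# Toy usage: samples whose dominant variance lies along the first axis.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((2000, 3)) * np.array([3.0, 1.0, 0.5])
    print(streaming_pca_sgd(iter(samples), dim=3, rng=rng))
```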



Journal

Journal title: IEEE Signal Processing Letters

Year: 2023

ISSN: 1558-2361, 1070-9908

DOI: https://doi.org/10.1109/lsp.2023.3279779